✅ Make embedding layer output Float32 #274
Conversation
Thanks for this. Looking at this again, I'm a little confused as to why it's even necessary. According to this line, the embedding layer weights are initialised using

Okay, I think the problem is that the integers you provide as inputs to the embedding layers must be Float64, and we're getting Float32 * Float64 = Float64. So perhaps the more correct solution is to ensure the integer levels are Float32, not Float64, which would mean fewer conversions than in the current PR. Does this sound right @EssamWisam?
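The promotion behaviour being described can be checked in plain Julia (a minimal sketch; the weight matrix here merely stands in for Flux's Float32-initialised embedding weights and is not MLJFlux's actual code):

```julia
# Float32 weights, as Flux initialises layers by default
W = rand(Float32, 4, 3)

# Float64 "integer levels", as the ordinal encoder currently produces them
x64 = [1.0, 2.0, 3.0]        # Vector{Float64}
y = W * x64
println(eltype(y))           # Float64: Float32 * Float64 promotes to Float64

# Casting the levels to Float32 keeps the output Float32
x32 = Float32.(x64)
println(eltype(W * x32))     # Float32
```

This is why ensuring the encoded levels are Float32 at the source avoids the need to cast the layer's output afterwards.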
Well, for the code in this PR:
Thus, it must be that in the other PR the input batch
In your embedding, you first convert a level, such as

My question is, do you send
Oh, I misunderstood what you were referring to as integers; you are talking about the ordinal encoder. It's quite easy to make them Float32 upon generation, but would you prefer that I do this here, or only in the other PR, to guarantee it has the intended effect?
Yes, this is my preference. Can you do it here, so I don't have to find the relevant code? I think if the current tests pass, then that ought to fix my issue at the other PR, but of course I'll check before merging this one.
Could you check now?
In response to issue #273, this casts the embedding layer's output to Float32 and adds a small test for it. @ablaom, is this how you want this implemented? If you prefer, I can instead annotate the return type so that it casts implicitly.
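The two alternatives mentioned (an explicit cast versus an implicit cast via a return-type annotation) can be sketched in Julia as follows; the function names are hypothetical illustrations, not MLJFlux's actual API:

```julia
# Hypothetical helpers illustrating the two casting styles
W = rand(Float32, 4, 3)
x = [1.0, 2.0, 3.0]                 # Float64 input, so W * x is Float64

# Option 1: explicit broadcast cast of the output
embed_explicit(W, x) = Float32.(W * x)

# Option 2: return-type annotation; Julia calls convert on the return value
embed_annotated(W, x)::Vector{Float32} = W * x

println(eltype(embed_explicit(W, x)))   # Float32
println(eltype(embed_annotated(W, x)))  # Float32
```

Both produce Float32 output; the annotation makes the conversion part of the function's signature rather than an extra broadcast in the body.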